A parallelizable augmented Lagrangian method applied to large-scale non-convex-constrained optimization problems

Authors

  • Natashia Boland
  • Jeffrey Christiansen
  • Brian Dandurand
  • Andrew Eberhard
  • Fabricio Oliveira
Abstract

We contribute improvements to a Lagrangian dual solution approach applied to large-scale optimization problems whose objective functions are convex, continuously differentiable and possibly nonlinear, while the non-relaxed constraint set is compact but not necessarily convex. Such problems arise, for example, in the split-variable deterministic reformulation of stochastic mixed-integer optimization problems. The dual solution approach needs to address the nonconvexity of the non-relaxed constraint set while being efficiently implementable in parallel. We adapt the augmented Lagrangian method framework to address the presence of nonconvexity in the non-relaxed constraint set and the need for efficient parallelization. The development of our approach is most naturally compared with the development of proximal bundle methods, especially with their use of serious step conditions. However, deviations from these developments allow for an improvement in the efficiency with which parallelization can be utilized. Pivotal in our modification to the augmented Lagrangian method is the integration of approaches based on the simplicial decomposition method (SDM) and the nonlinear block Gauss-Seidel (GS) method. An adaptation of a serious step condition associated with proximal bundle methods allows the approximation tolerance to be adjusted automatically.

This work was supported by the Australian Research Council (ARC) grant ARC DP140100985.

N. Boland, Georgia Institute of Technology, Atlanta, USA
J. Christiansen, RMIT University, Melbourne, Victoria, Australia
B. Dandurand, RMIT University, Melbourne, Victoria, Australia
A. Eberhard, RMIT University, Melbourne, Victoria, Australia (Tel.: +61-3-9925-2616, Fax: +61-3-9925-1748, E-mail: [email protected])
F. Oliveira, RMIT University, Melbourne, Victoria, Australia
Under mild conditions, optimal dual convergence is proven, and we report computational results on test instances from the stochastic optimization literature. We demonstrate an improvement in parallel speedup over a baseline parallel approach.
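The abstract describes an augmented Lagrangian outer loop whose subproblems are minimized blockwise by a nonlinear Gauss-Seidel sweep. As a rough illustration of that inner/outer structure only (not the authors' algorithm, which additionally uses simplicial decomposition and serious step tests), the following sketch applies a two-block Gauss-Seidel augmented Lagrangian iteration to a hypothetical toy split-variable problem, min (x1-1)^2 + (x2-3)^2 subject to the relaxed coupling x1 = x2; the instance, names, and constants are all illustrative.

```python
def alm_gauss_seidel(rho=10.0, tol=1e-10, max_iter=2000):
    """Toy two-block Gauss-Seidel augmented Lagrangian sketch for
    min (x1-1)^2 + (x2-3)^2  s.t.  x1 = x2  (relaxed constraint)."""
    x1, x2, lam = 0.0, 0.0, 0.0
    for _ in range(max_iter):
        # Block 1: minimize L_rho over x1 with x2 fixed (closed form
        # from 2(x1-1) + lam + rho*(x1-x2) = 0).
        x1 = (2.0 + rho * x2 - lam) / (2.0 + rho)
        # Block 2: minimize over x2 using the freshly updated x1
        # (this sequential sweep is the Gauss-Seidel step).
        x2 = (6.0 + lam + rho * x1) / (2.0 + rho)
        # Outer multiplier update on the residual of x1 - x2 = 0.
        lam += rho * (x1 - x2)
        if abs(x1 - x2) < tol:
            break
    return x1, x2, lam

x1, x2, lam = alm_gauss_seidel()
```

At the unique optimum both blocks agree at x1 = x2 = 2 with multiplier lam = -2, which the iteration recovers to high accuracy.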


Similar articles

An interior-point Lagrangian decomposition method for separable convex optimization

In this paper we propose a distributed algorithm for solving large-scale separable convex problems using Lagrangian dual decomposition and the interior-point framework. By adding self-concordant barrier terms to the ordinary Lagrangian we prove under mild assumptions that the corresponding family of augmented dual functions is self-concordant. This makes it possible to efficiently use the Newto...


Distributed Convex Optimization with Many Convex Constraints

We address the problem of solving convex optimization problems with many convex constraints in a distributed setting. Our approach is based on an extension of the alternating direction method of multipliers (ADMM) that recently gained a lot of attention in the Big Data context. Although it was invented decades ago, ADMM so far can be applied only to unconstrained problems and problems with...


Very large scale optimization by sequential convex programming

We introduce a method for constrained nonlinear programming that is widely used in mechanical engineering and is known under the name SCP, for sequential convex programming. The algorithm consists of solving a sequence of convex and separable subproblems, where an augmented Lagrangian merit function is used to guarantee convergence. Originally, SCP methods were developed in structural ...


Augmented Lagrangian method for solving absolute value equation and its application in two-point boundary value problems

One of the most important topics considered by researchers in recent years is the absolute value equation (AVE). The absolute value equation seems to be a useful tool in optimization, since it subsumes the linear complementarity problem and thus also linear programming and convex quadratic programming. This paper introduces a new method for solving the absolute value equation. To do this, we transform a...
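For context, a standard approach to the AVE Ax - |x| = b (not necessarily the method proposed in the paper above) is the generalized Newton iteration of Mangasarian, which repeatedly solves the linearized system (A - D(x_k)) x_{k+1} = b with D(x) = diag(sign(x)). A minimal sketch on a hypothetical 2x2 instance whose singular values exceed 1, a standard sufficient condition for a unique solution:

```python
import numpy as np

def generalized_newton_ave(A, b, max_iter=50, tol=1e-10):
    """Solve Ax - |x| = b via x_{k+1} = (A - D(x_k))^{-1} b,
    where D(x) = diag(sign(x)) linearizes the |x| term."""
    x = np.asarray(b, dtype=float).copy()  # common initialization x0 = b
    for _ in range(max_iter):
        x_new = np.linalg.solve(A - np.diag(np.sign(x)), b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative instance: b is built from a known solution x_true.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)
x = generalized_newton_ave(A, b)
```

On well-conditioned instances like this one, the sign pattern stabilizes after a few iterations and the method recovers x_true exactly.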


First-order methods for constrained convex programming based on linearized augmented Lagrangian function

First-order methods have been popularly used for solving large-scale problems. However, many existing works only consider unconstrained problems or those with simple constraints. In this paper, we develop two first-order methods for constrained convex programs, for which the constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic...



Publication year: 2016